Hydrogen Fuel Cell Analysis: Lessons Learned from Stationary Power Generation Final Report
This study considered opportunities for hydrogen in stationary applications in order to recommend RD&D strategies that incorporate lessons learned and best practices from relevant national and international stationary power efforts, along with cost and environmental modeling of hydrogen pathways. The study analyzed the strategies used in power generation systems and identified the challenges and opportunities in producing and using hydrogen as an energy carrier. Specific objectives included both a synopsis and critical analysis of lessons learned from previous stationary power programs, and recommendations for a hydrogen infrastructure deployment strategy. This strategy incorporates all hydrogen pathways and a combination of distributed power generating stations, and provides an overview of stationary power markets, the benefits of hydrogen-based stationary power systems, and the competitive and technological challenges. The motivation for this project was to identify lessons learned from prior stationary power programs: the most significant obstacles, how those obstacles were approached, the outcomes of the programs, and how this information can be used by the Hydrogen, Fuel Cells & Infrastructure Technologies Program to meet program objectives, primarily those related to hydrogen pathway technologies (production, storage, and delivery) and the implementation of fuel cell technologies for distributed stationary power. In addition, the lessons learned address environmental and safety concerns, including codes and standards, and the education of key stakeholders.
Graz. Schloßberg.
View of the Schloßberg with the Hauptbrücke (main bridge)
The Quality of Response Time Data Inference: A Blinded, Collaborative Assessment of the Validity of Cognitive Models
Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables, including 1) ease of processing, 2) response caution, 3) response bias, and 4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across the different methods, these "modeler's degrees of freedom" did affect the inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those of the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output of response time models.
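The four latent factors named in the abstract can be illustrated with a minimal evidence-accumulation simulation. This is only a generic random-walk sketch, not the model of any contributing team; the parameter values (drift, boundary, bias, non-decision time) are arbitrary illustrations.

```python
import random

def simulate_trial(drift=0.2, boundary=1.0, bias=0.5, ndt=0.3, dt=0.001, noise=1.0):
    """Simulate one two-alternative trial with a noisy random-walk evidence process.

    drift    ~ ease of processing (signal strength)
    boundary ~ response caution (separation between the decision bounds)
    bias     ~ response bias (relative start point, 0..1)
    ndt      ~ non-decision time (encoding + motor time, seconds)
    """
    evidence = bias * boundary  # start point between the two bounds
    t = 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    choice = 1 if evidence >= boundary else 0  # upper vs. lower bound
    return choice, ndt + t  # observed RT = decision time + non-decision time

random.seed(1)
trials = [simulate_trial() for _ in range(500)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy={accuracy:.2f}, mean RT={mean_rt:.2f}s")
```

Manipulating one parameter at a time (e.g. raising `boundary` to mimic increased caution) shifts both accuracy and the RT distribution, which is exactly why inferring *which* factor changed from data alone is the hard inverse problem the study probes.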
Newsvendor solutions with general random yield distributions
Most systems are characterized by uncertainties that cause throughput to be highly variable; for example, many modern production processes and services are substantially affected by random yields. When yield is random, not only is the usable quantity uncertain, but the random yield also reduces usable capacity and throughput in the system. For these reasons, strategies are needed that incorporate random yield. This paper presents an analysis of the newsvendor model with a general random yield distribution, including the derivation of the optimal order quantity. The results are shown to converge to the basic newsvendor model in the case of perfect yield, and are further demonstrated for the case of general multiplicative random yield. The results have significant impact on both the manufacturing and service sectors, since the newsvendor model applies to many real-world situations.
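A rough numerical illustration of the multiplicative random yield setting: if an order of size q delivers a usable quantity U*q, the profit-maximizing q generally exceeds the classical newsvendor quantity. The yield and demand distributions below are invented for the example, and the grid-plus-Monte-Carlo search is only a sketch; the paper derives the optimal order quantity analytically.

```python
import random

def expected_profit(q, price=10.0, cost=4.0, n=20_000, rng=random):
    """Monte Carlo estimate of expected profit for order quantity q
    under multiplicative random yield (illustrative distributions)."""
    total = 0.0
    for _ in range(n):
        yield_frac = rng.uniform(0.7, 1.0)   # assumed yield distribution U
        demand = max(rng.gauss(100.0, 20.0), 0.0)  # assumed demand distribution
        usable = yield_frac * q              # usable quantity is U * q
        total += price * min(usable, demand) - cost * q  # cost paid on units ordered
    return total / n

random.seed(0)
candidates = range(80, 161, 5)               # grid of order quantities
best_q = max(candidates, key=expected_profit)
print("best order quantity ~", best_q)
```

With mean yield 0.85 the effective cost per usable unit rises to roughly cost/0.85, so the critical-fractile target applies to the usable quantity, pushing the order above the perfect-yield solution.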
Finite Buffer Polling Models with Routing
This paper analyzes a finite buffer polling system with routing. Finite buffers are used to model the limited capacity of the system, and routing is used to represent the need for additional service. The most significant result of the analysis is the derivation of the generating function for the queue length when buffer sizes are limited, together with a representation of the system workload. The queue lengths at polling instants are determined by solving a system of recursive equations, and an embedded Markov chain analysis with numerical inversion is used to derive the queue length distributions. This system may be used to represent production models with setups and either lost sales or expediting.
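A toy discrete-time simulation conveys the finite-buffer polling dynamics the abstract describes: exhaustive service, a setup delay when the server switches queues, and arrivals lost at a full buffer. This is an illustrative sketch with made-up parameters, not the paper's embedded Markov chain analysis.

```python
import random

def polling_loss_rate(p=0.3, capacity=5, setup=2, queues=2, slots=100_000, seed=42):
    """Estimate the fraction of lost arrivals in a cyclic polling system.

    Each slot, every queue receives an arrival with probability p; arrivals
    to a full buffer are lost (the 'lost sales' interpretation). The server
    serves its current queue exhaustively, one customer per slot, and pays
    `setup` idle slots whenever it switches to the next queue.
    """
    rng = random.Random(seed)
    q = [0] * queues
    current, switch_timer = 0, 0
    arrived = lost = 0
    for _ in range(slots):
        for i in range(queues):              # Bernoulli arrivals per queue
            if rng.random() < p:
                arrived += 1
                if q[i] < capacity:
                    q[i] += 1
                else:
                    lost += 1                # buffer full: arrival is lost
        if switch_timer > 0:                 # still paying the setup time
            switch_timer -= 1
        elif q[current] > 0:                 # exhaustive service
            q[current] -= 1
        else:                                # queue empty: poll the next one
            current = (current + 1) % queues
            switch_timer = setup
    return lost / arrived if arrived else 0.0

rate = polling_loss_rate()
print(f"loss rate ~ {rate:.3f}")
```

Raising the arrival probability past the server's capacity makes the loss rate climb sharply, which is the regime where finite-buffer effects dominate the analysis.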
Residential Energy Performance Metrics
Residential energy monitoring is an emerging field that is currently drawing significant attention. This paper describes ongoing efforts to monitor and compare the performance of three solar powered homes built at Missouri University of Science and Technology. The homes are outfitted with an array of sensors and a data logger system to measure and record electricity production, system energy use, internal home temperature and humidity, hot water production, and the exterior ambient conditions the houses are experiencing. Data are being collected to measure the performance of the houses, compare it with predictions from energy modeling programs, design and develop cost-effective sensor systems for energy monitoring, and produce a cost-effective home control system.
A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems
Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need to obtain high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows metaheuristics to be extended through simulation to solve stochastic COPs. "Simheuristics" let modelers deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
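The simheuristic idea (drive the search with a deterministic surrogate objective, then re-evaluate promising solutions by simulation) can be sketched on a small stochastic knapsack instance. Everything here is an invented illustration, not from the paper: the instance data, the 10% weight noise, and the simple bit-flip local search standing in for a full metaheuristic.

```python
import random

def simheuristic_knapsack(values, mean_w, capacity, iters=2000, sims=300, seed=7):
    """Simheuristic sketch: local search explores solutions using the
    deterministic objective (mean weights); an elite pool of accepted
    solutions is then re-ranked by simulation under random weights
    (assumed here to be N(mean, 10% sd), an illustrative choice)."""
    rng = random.Random(seed)
    n = len(values)

    def det_value(sol):  # deterministic surrogate objective
        w = sum(mw for mw, s in zip(mean_w, sol) if s)
        return sum(v for v, s in zip(values, sol) if s) if w <= capacity else -1

    def sim_value(sol):  # expected value when weights are random
        total = 0.0
        for _ in range(sims):
            w = sum(rng.gauss(mw, 0.1 * mw) for mw, s in zip(mean_w, sol) if s)
            v = sum(v_ for v_, s in zip(values, sol) if s)
            total += v if w <= capacity else 0.0  # infeasible draws earn nothing
        return total / sims

    best = [0] * n
    elites = []
    for _ in range(iters):               # bit-flip local search on the surrogate
        cand = best[:]
        cand[rng.randrange(n)] ^= 1
        if det_value(cand) >= det_value(best):
            best = cand
            elites.append(cand)
    return max(elites, key=sim_value)    # simulation-based re-ranking

values = [10, 7, 5, 9, 3]
mean_w = [4, 3, 2, 5, 1]
sol = simheuristic_knapsack(values, mean_w, capacity=10)
print("chosen items:", [i for i, s in enumerate(sol) if s])
```

Note the design point this illustrates: the deterministically optimal packing may fill the knapsack exactly and so be infeasible in roughly half of the noisy draws, letting a slightly lighter solution win the simulation-based re-ranking; this is also where risk and reliability criteria enter naturally.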